1.
Urol Int ; 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38555637

ABSTRACT

INTRODUCTION: This study assessed the potential of Large Language Models (LLMs) as educational tools by evaluating their accuracy in answering questions across urological subtopics. METHODS: Three LLMs (ChatGPT-3.5, ChatGPT-4, and Bing AI) were examined in two testing rounds, separated by 48 hours, using 100 multiple-choice questions (MCQs) from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA), covering five different subtopics. A correct answer was defined as "formal accuracy" (FA), representing the designated single best answer (SBA) among four options. Alternative answers selected by the LLMs that are not the SBA but are still deemed correct were labeled "extended accuracy" (EA); their capacity to raise the overall accuracy rate when combined with FA was examined. RESULTS: Across the two testing rounds, FA scores were as follows: ChatGPT-3.5: 58% and 62%; ChatGPT-4: 63% and 77%; Bing AI: 81% and 73%. Incorporating EA did not yield a significant enhancement in overall performance: the resulting gains for ChatGPT-3.5, ChatGPT-4, and Bing AI were 7% and 5%, 5% and 2%, and 3% and 1%, respectively (p > 0.3). Within urological subtopics, the LLMs performed best in Pediatrics/Congenital and comparatively worse in Functional/BPS/Incontinence. CONCLUSION: LLMs exhibit suboptimal urology knowledge and unsatisfactory proficiency for educational purposes. Overall accuracy did not improve significantly when EA was combined with FA, and error rates remained high, ranging from 16% to 35%. Proficiency varies substantially across subtopics. Further development of medicine-specific LLMs is required before integration into urological training programs.
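The FA/EA scoring scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's code; the answer data below are hypothetical.

```python
# Illustrative sketch (not the study's code): scoring formal accuracy (FA)
# and extended accuracy (FA + EA) over a set of MCQ responses.

def score_accuracy(responses, sba_key, acceptable_key):
    """Return (formal_accuracy, extended_accuracy) as fractions.

    responses      -- answers given by the model, e.g. ["A", "C", ...]
    sba_key        -- single-best answers (defines FA)
    acceptable_key -- sets of all answers deemed correct (defines EA)
    """
    n = len(responses)
    fa = sum(r == k for r, k in zip(responses, sba_key)) / n
    ea = sum(r in acc for r, acc in zip(responses, acceptable_key)) / n
    return fa, ea

# Hypothetical 5-question example:
responses = ["A", "B", "C", "D", "A"]
sba_key = ["A", "B", "D", "D", "B"]
acceptable = [{"A"}, {"B"}, {"C", "D"}, {"D"}, {"B"}]
fa, ea = score_accuracy(responses, sba_key, acceptable)
# fa = 0.6; ea = 0.8 (question 3's acceptable alternative "C" is counted)
```

By construction EA can only add to FA, which is why the abstract reports EA as a gain on top of FA rather than a separate score.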

2.
Cancers (Basel) ; 16(4)2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38398144

ABSTRACT

Optimal urine-based diagnostic tests (UBDTs) minimize unnecessary follow-up cystoscopies in patients with non-muscle-invasive bladder cancer (NMIBC) while accurately detecting high-grade bladder cancer without false-negative results. Such UBDTs have not been comprehensively evaluated on a broad, validated dataset, resulting in cautious guideline recommendations. Uromonitor®, a urine-based DNA assay detecting hotspot alterations in TERT, FGFR3, and KRAS, has shown promising initial results, but a systematic review merging all available data is lacking. Studies investigating the diagnostic performance of Uromonitor® in NMIBC up to November 2023 were identified in the PubMed, Embase, Web of Science, Cochrane, Scopus, and medRxiv databases. Within aggregated analyses, test performance and the area under the curve (AUC) were calculated. The project fully adhered to the PRISMA statement. Four qualifying studies comprised a total of 1190 urinary tests (bladder-cancer prevalence: 14.9%). Based on comprehensive analyses, the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and test accuracy of Uromonitor® were 80.2%, 96.9%, 82.1%, 96.6%, and 94.5%, respectively, with an AUC of 0.886 (95% CI: 0.851-0.921). In a meta-analysis of two studies comparing test performance with urinary cytology, Uromonitor® significantly outperformed cytology in sensitivity, PPV, and test accuracy, while no significant differences were observed for specificity and NPV. This systematic review supports the use of Uromonitor® given its favorable diagnostic performance. In a cohort of 1000 patients with a bladder-cancer prevalence of ~15%, this UBDT would avert 825 unnecessary cystoscopies (true negatives) while missing 30 bladder-cancer cases (false negatives). Given the currently limited aggregated data from only four studies of heterogeneous quality, confirmatory studies are needed.
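The 1000-patient worked example in the abstract follows directly from the pooled sensitivity, specificity, and prevalence. A minimal sketch of that confusion-matrix arithmetic (using the abstract's figures; the function itself is illustrative):

```python
# Sketch checking the abstract's worked example: in 1000 patients with a
# bladder-cancer prevalence of ~14.9%, how many cystoscopies does a test
# with 80.2% sensitivity and 96.9% specificity avert, and how many
# cancers does it miss?

def confusion_counts(n, prevalence, sensitivity, specificity):
    pos = round(n * prevalence)      # patients with bladder cancer
    neg = n - pos                    # patients without
    tp = sensitivity * pos           # cancers detected
    fn = pos - tp                    # cancers missed (false negatives)
    tn = specificity * neg           # cystoscopies averted (true negatives)
    fp = neg - tn                    # unnecessary cystoscopies still triggered
    return tp, fn, tn, fp

tp, fn, tn, fp = confusion_counts(1000, 0.149, 0.802, 0.969)
# tn ≈ 825 averted cystoscopies; fn ≈ 30 missed cancers
```

Rounding the expected counts reproduces the abstract's 825 averted cystoscopies and 30 missed cancers.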

3.
World J Urol ; 42(1): 20, 2024 Jan 10.
Article in English | MEDLINE | ID: mdl-38197996

ABSTRACT

PURPOSE: This study is a comparative analysis of three Large Language Models (LLMs), evaluating their rate of correct answers (RoCA) and the reliability of generated answers on a set of urological knowledge-based questions spanning different levels of complexity. METHODS: ChatGPT-3.5, ChatGPT-4, and Bing AI underwent two testing rounds, with a 48-h gap in between, using the 100 multiple-choice questions from the 2022 European Board of Urology (EBU) In-Service Assessment (ISA). For conflicting responses, an additional consensus round was conducted to establish conclusive answers. RoCA was compared across various question complexities. Ten weeks after the consensus round, a subsequent testing round was conducted to assess potential knowledge gain and improvement in RoCA. RESULTS: Over three testing rounds, ChatGPT-3.5 achieved RoCA scores of 58%, 62%, and 59%. In contrast, ChatGPT-4 achieved RoCA scores of 63%, 77%, and 77%, while Bing AI yielded scores of 81%, 73%, and 77%, respectively. Agreement rates between rounds 1 and 2 were 84% (κ = 0.67, p < 0.001) for ChatGPT-3.5, 74% (κ = 0.40, p < 0.001) for ChatGPT-4, and 76% (κ = 0.33, p < 0.001) for Bing AI. In the consensus round, ChatGPT-4 and Bing AI significantly outperformed ChatGPT-3.5 (77% and 77% vs. 59%, both p = 0.010). All LLMs demonstrated decreasing RoCA scores with increasing question complexity (p < 0.001). In the fourth round, no significant improvement in RoCA was observed for any of the three LLMs. CONCLUSIONS: The performance of the tested LLMs in addressing urological specialist inquiries warrants further refinement. Moreover, the deficiency in response reliability adds to the existing challenges limiting their current utility for educational purposes.
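The between-round reliability figures above (e.g. 84% agreement, κ = 0.67) pair an observed agreement rate with Cohen's kappa, which corrects that rate for chance agreement. A minimal sketch of the computation, with hypothetical answer data:

```python
# Illustrative sketch of the between-round reliability measure reported in
# the abstract: Cohen's kappa over two rounds of answers to the same MCQs.
from collections import Counter

def cohens_kappa(round1, round2):
    """Cohen's kappa for two paired lists of categorical answers."""
    n = len(round1)
    po = sum(a == b for a, b in zip(round1, round2)) / n  # observed agreement
    c1, c2 = Counter(round1), Counter(round2)
    # Expected chance agreement from each round's marginal answer frequencies.
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical example: 10 MCQ answers from two testing rounds.
r1 = ["A", "B", "C", "D", "A", "B", "C", "D", "A", "B"]
r2 = ["A", "B", "C", "C", "A", "B", "D", "D", "A", "A"]
kappa = cohens_kappa(r1, r2)
# 7/10 raw agreement here yields kappa ≈ 0.59 after the chance correction
```

This is why a model can show a higher raw agreement rate yet a lower kappa than another, as ChatGPT-4 (74%, κ = 0.40) versus Bing AI (76%, κ = 0.33) illustrates: kappa depends on each round's answer distribution, not just the match rate.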


Subjects
Artificial Intelligence , Urology , Humans , Reproducibility of Results , Physical Examination , Language